Vehicle-to-Everything (V2X) communication has been proposed as a potential solution to improve the robustness and safety of autonomous vehicles by improving coordination and removing the barrier of non-line-of-sight sensing. Cooperative Vehicle Safety (CVS) applications depend heavily on the reliability of the underlying data system, which can suffer from loss of information due to the inherent issues of its various components, such as sensor failures or the poor performance of V2X technologies under dense communication channel load. In particular, information loss affects the target classification module and, subsequently, the safety application performance. To enable reliable and robust CVS systems that mitigate the effect of information loss, we propose a Context-Aware Target Classification (CA-TC) module coupled with a hybrid learning-based predictive modeling technique for CVS systems. The CA-TC consists of two modules: a Context-Aware Map (CAM) and a Hybrid Gaussian Process (HGP) prediction system. The vehicle safety applications then use the information from the CA-TC, making them more robust and reliable. The CAM leverages vehicle path history, road geometry, tracking, and prediction, while the HGP provides accurate vehicle trajectory predictions to compensate for data loss (due to communication congestion) or sensor measurement inaccuracies. Based on offline real-world data, we learn a finite bank of driver models that represent the joint dynamics of the vehicle and the driver's behavior. We combine offline training and online model updates with on-the-fly forecasting to account for new possible driver behaviors. Finally, our framework is validated using simulation and realistic driving scenarios to confirm its potential in enhancing the robustness and reliability of CVS systems.
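As a rough illustration of the bank-of-driver-models idea, the sketch below (with synthetic stand-in data and sklearn as an assumed tool, not the paper's implementation) fits one GP per driver style offline, then selects the model whose predictive likelihood best explains the most recent observations before forecasting:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Offline phase: learn one GP per driver style from logged (time, speed) traces.
# The traces below are synthetic stand-ins for the real-world training data.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)[:, None]
traces = {
    "calm":       20 + 1.0 * np.sin(0.3 * t).ravel() + 0.1 * rng.standard_normal(50),
    "aggressive": 25 + 4.0 * np.sin(0.8 * t).ravel() + 0.3 * rng.standard_normal(50),
}
bank = {}
for style, speed in traces.items():
    gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
    bank[style] = gp.fit(t, speed)

# Online phase: score each model's predictive likelihood on the most recent
# observations and forecast with the best match.
t_recent = np.linspace(10, 12, 8)[:, None]
v_recent = 25 + 4.0 * np.sin(0.8 * t_recent).ravel()           # looks "aggressive"

def predictive_loglik(gp, X, y):
    mu, sd = gp.predict(X, return_std=True)
    return norm.logpdf(y, loc=mu, scale=sd).sum()

best = max(bank, key=lambda s: predictive_loglik(bank[s], t_recent, v_recent))
t_future = np.linspace(12, 14, 10)[:, None]
mu, sd = bank[best].predict(t_future, return_std=True)          # forecast + uncertainty
print(best, mu[:3].round(2), sd[:3].round(2))
```

In this toy version the online update amounts to re-scoring the bank on a sliding window; the paper combines this with online model updates to cover behaviors not in the offline bank.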
Developing safety and efficiency applications for connected and automated vehicles (CAVs) requires a great deal of testing and evaluation. The need to operate these systems in critical and dangerous situations makes the burden of their evaluation very costly, possibly dangerous, and time-consuming. As an alternative, researchers seek to study and evaluate their algorithms and designs using simulation platforms. Modeling the behavior of drivers or human operators in CAVs or in other vehicles interacting with them is one of the main challenges of such simulations. While developing a perfect model for human behavior is a challenging task and an open problem, we present a significant augmentation of the current driver-behavior models used in simulators. In this paper, we present a simulation platform for a hybrid transportation system that includes both human-driven and automated vehicles. In addition, we decompose the human driving task and offer a modular approach to simulating large-scale traffic scenarios, allowing for a thorough investigation of automated and active safety systems. This representation through interconnected modules offers a decomposed model of the human driving task that can be tuned to represent different classes of drivers. Additionally, we analyze a large driving dataset to extract expressive parameters that best describe different driving characteristics. Finally, we recreate a similarly dense traffic scenario within our simulator and conduct a thorough analysis of various human-specific and system-specific factors, studying their effect on traffic network performance and safety.
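To make the modular decomposition concrete, here is a minimal sketch of a driver agent assembled from perception, decision, and control stages; the module boundaries, parameters, and car-following rule are illustrative assumptions, not the paper's actual models:

```python
from dataclasses import dataclass

# Hypothetical parameterization; the paper's actual module set and tunable
# parameters are not specified in the abstract.

@dataclass
class DriverParams:
    reaction_time: float   # s, perception/decision latency
    desired_gap: float     # s, desired time headway
    max_decel: float       # m/s^2, comfort braking limit

CALM       = DriverParams(reaction_time=1.2, desired_gap=2.0, max_decel=3.0)
AGGRESSIVE = DriverParams(reaction_time=0.6, desired_gap=0.8, max_decel=6.0)

class ModularDriver:
    """Driver agent built from interchangeable perception/decision/control stages."""

    def __init__(self, params: DriverParams):
        self.p = params

    def perceive(self, lead_gap: float, lead_speed: float, ego_speed: float):
        # Perception stage: a fuller model would add sensing noise and delay here.
        return {"gap": lead_gap, "closing": ego_speed - lead_speed}

    def decide(self, obs, ego_speed: float) -> float:
        # Decision stage: keep the desired time-headway gap (toy rule).
        gap_error = obs["gap"] - self.p.desired_gap * ego_speed
        return 0.5 * gap_error - 0.8 * obs["closing"]   # desired acceleration

    def control(self, accel_cmd: float) -> float:
        # Control/actuation stage: saturate at the driver's comfort limits.
        return max(-self.p.max_decel, min(accel_cmd, 2.0))

    def step(self, lead_gap, lead_speed, ego_speed):
        obs = self.perceive(lead_gap, lead_speed, ego_speed)
        return self.control(self.decide(obs, ego_speed))

# Same situation, two driver classes, different responses:
print(ModularDriver(CALM).step(lead_gap=30.0, lead_speed=20.0, ego_speed=25.0))
print(ModularDriver(AGGRESSIVE).step(lead_gap=30.0, lead_speed=20.0, ego_speed=25.0))
```

Swapping a module (e.g., replacing `decide` with an automated controller) is what lets the same pipeline represent human-driven and automated vehicles side by side.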
Cooperative driving systems, such as platooning, rely on communication and information exchange to create situational awareness for each agent. Consequently, the design and performance of the control components are tightly coupled with the performance of the communication components. The information flow between vehicles can significantly affect platoon dynamics; hence, the performance and stability of a platoon depend not only on the vehicles' controllers but also on the information flow topology (IFT). The IFT can impose limitations on certain platoon properties, namely stability and scalability. Cellular Vehicle-to-Everything (C-V2X) has emerged as one of the main communication technologies supporting connected and automated vehicle applications. Due to packet loss, the wireless channel creates random link interruptions and variations in the network topology. In this paper, we model the communication links between vehicles with first-order Markov models to capture the prevalent temporal correlations of each link. These models enable performance evaluation through a better approximation of the communication links during the system design phase. Our approach is to use experimental data to model the inter-packet gap (IPG) with Markov chains and to derive transition probability matrices for consecutive IPG states. Training data is collected from high-fidelity simulations using models derived from empirical data for various vehicle densities and communication rates. Utilizing the IPG models, we analyze the mean-square stability of a platoon of vehicles with a standard consensus protocol tuned for ideal communication, and compare the performance degradation across different scenarios.
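A minimal sketch of the modeling step, assuming IPG states measured in multiples of the nominal broadcast period and a synthetic trace in place of the experimental data:

```python
import numpy as np

# Toy IPG trace in units of the nominal broadcast period: 1 = packet arrived on
# schedule, k > 1 = (k - 1) consecutive losses before the next reception. A real
# trace would come from C-V2X experiments or high-fidelity simulation.
rng = np.random.default_rng(1)
ipg_trace = rng.choice([1, 2, 3, 4], size=5000, p=[0.7, 0.2, 0.07, 0.03])

n_states = ipg_trace.max()
counts = np.zeros((n_states, n_states))
for a, b in zip(ipg_trace[:-1], ipg_trace[1:]):
    counts[a - 1, b - 1] += 1                     # transition from IPG state a to b

# Row-normalize counts into a transition probability matrix P (guarding empty rows).
row_sums = counts.sum(axis=1, keepdims=True)
P = counts / np.where(row_sums > 0, row_sums, 1)

def simulate_link(P, steps, rng):
    """Generate a correlated IPG sequence from the fitted first-order chain."""
    s = 0
    seq = []
    for _ in range(steps):
        s = rng.choice(len(P), p=P[s])
        seq.append(s + 1)
    return np.array(seq)

print(np.round(P, 3))
print(simulate_link(P, 20, rng))                  # link realization for evaluation
```

On real traces the rows of P differ noticeably (losses cluster), which is exactly the temporal correlation that a memoryless loss model misses and that the first-order chain captures.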
Cooperative driving relies on communication among vehicles to create situational awareness. One application of cooperative driving is Cooperative Adaptive Cruise Control (CACC), which aims to improve highway transportation safety and capacity. Model-Based Communication (MBC) is a new paradigm with a flexible content structure for broadcasting joint vehicle-driver predictive behavioral models. The complex dynamics of vehicles and the diversity of driving behaviors add complexity to the modeling process. The Gaussian Process (GP) is a fully data-driven, non-parametric Bayesian modeling approach that can serve as the modeling component of MBC. Knowledge about the uncertainty is propagated by generating local GPs for vehicles and broadcasting their hyperparameters as a model to neighboring vehicles. In this study, GPs are used to model each vehicle's speed trajectory, which allows vehicles to access the future behavior of their preceding vehicle during communication loss and/or low-rate communication. In addition, to overcome safety issues in a vehicle platoon, two operating modes are considered for each vehicle: free following and emergency braking. This paper presents a discrete hybrid stochastic model predictive control that incorporates the system modes as well as the uncertainty captured by the GP models. The proposed control design approach finds the optimal speed trajectory with the objective of achieving a safe and efficient platoon with small inter-vehicle gaps while reducing the vehicles' reliance on frequent communication. Simulation studies demonstrate the efficacy of the proposed controller under the aforementioned communication paradigm with low-rate, intermittent communication.
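The following sketch illustrates the hyperparameter-broadcasting idea using sklearn GPs; the message contents and kernel choice are assumptions for illustration, and the paper's actual MBC encoding may differ:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sender side: fit a local GP to the vehicle's recent speed trajectory.
t = np.linspace(0, 8, 40)[:, None]                       # time (s)
v = 22 + 2 * np.sin(0.5 * t).ravel()                     # speed (m/s), toy trace
gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True).fit(t, v)

# The MBC "message": kernel hyperparameters plus a small set of support points,
# instead of raw high-rate samples. Illustrative encoding only.
message = {
    "theta": gp.kernel_.theta,                           # log-hyperparameters
    "support_t": t[::8].ravel().tolist(),                # sparse samples
    "support_v": v[::8].tolist(),
}

# Receiver side: rebuild the model from the broadcast hyperparameters and
# predict the sender's near-future speed during a communication gap.
kernel = (RBF() + WhiteKernel()).clone_with_theta(np.asarray(message["theta"]))
remote = GaussianProcessRegressor(kernel, optimizer=None, normalize_y=True)
remote.fit(np.array(message["support_t"])[:, None], np.array(message["support_v"]))

t_future = np.linspace(8, 10, 5)[:, None]
mu, sd = remote.predict(t_future, return_std=True)
print(np.round(mu, 2), np.round(sd, 2))                  # forecast with uncertainty
```

The predictive standard deviation is what a stochastic MPC layer can consume directly, tightening or relaxing inter-vehicle gaps as GP uncertainty grows between broadcasts.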
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method. Since the introduction of DARTS, little work has been done on adapting its action space to state-of-the-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a DARTS network of similar size at layer counts as small as 2. Furthermore, with fewer layers, it not only achieves higher accuracy with lower GMACs and parameter count; GradCAM comparisons also show that our network detects distinctive features of target objects better than DARTS does.
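For orientation, here is a sketch of a ConvNeXt-style block with a configurable expansion ratio; shrinking the ratio from ConvNeXt's 4x is one way to cut the inverted bottleneck's footprint, though the paper's exact Pseudo-Inverted Bottleneck layout may differ from this ordering of operations:

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """ConvNeXt-style block with a configurable expansion ratio.

    ConvNeXt uses expansion=4 (an inverted bottleneck); reducing the ratio cuts
    GMACs and parameters. Illustrative layout, not the paper's exact block.
    """

    def __init__(self, dim: int, expansion: float = 1.0):
        super().__init__()
        hidden = max(1, int(dim * expansion))
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)                 # applied channels-last
        self.pw1 = nn.Linear(dim, hidden)             # pointwise expand
        self.act = nn.GELU()
        self.pw2 = nn.Linear(hidden, dim)             # pointwise project

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                     # NCHW -> NHWC
        x = self.pw2(self.act(self.pw1(self.norm(x))))
        return shortcut + x.permute(0, 3, 1, 2)       # NHWC -> NCHW

n_params = lambda m: sum(p.numel() for p in m.parameters())
x = torch.randn(1, 96, 32, 32)
for e in (4.0, 1.0):                                  # inverted vs. reduced ratio
    blk = BottleneckBlock(96, expansion=e)
    print(e, tuple(blk(x).shape), n_params(blk))
```

Running this shows the parameter count dropping sharply as the expansion ratio shrinks while the block's input/output shapes stay identical, which is what makes the change a drop-in candidate for the DARTS search space.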
Solving portfolio management problems using deep reinforcement learning has attracted much attention in finance in recent years. We propose a new method that feeds expert signals and historical price data into a reinforcement learning framework. Although expert signals have been used in previous work in finance, to the best of our knowledge this is the first time such signals are used in tandem with deep RL to solve the financial portfolio management problem. Our proposed framework consists of a convolutional network for aggregating signals, another convolutional network for historical price data, and a vanilla network. We use the Proximal Policy Optimization (PPO) algorithm as the agent to process the reward and take actions in the environment. The results suggest that, on average, our framework can gain 90 percent of the profit earned by the best expert.
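A possible shape of such a policy network is sketched below; the layer sizes and fusion scheme are assumptions, since the abstract specifies only the three-component structure (signal branch, price branch, vanilla head):

```python
import torch
import torch.nn as nn

class PortfolioPolicy(nn.Module):
    """Two conv branches (expert signals, price history) + a plain MLP head.

    Illustrative sizes; the abstract does not specify the architecture details.
    """

    def __init__(self, n_assets: int, n_experts: int, window: int):
        super().__init__()
        # Branch 1: aggregate expert signals, input shape (B, n_experts, n_assets).
        self.sig_net = nn.Sequential(
            nn.Conv1d(n_experts, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Branch 2: historical prices, input shape (B, n_assets, window).
        self.price_net = nn.Sequential(
            nn.Conv1d(n_assets, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # "Vanilla" head: map fused features to portfolio weights.
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                  nn.Linear(64, n_assets))

    def forward(self, signals, prices):
        z = torch.cat([self.sig_net(signals), self.price_net(prices)], dim=1)
        return torch.softmax(self.head(z), dim=1)     # weights sum to 1

policy = PortfolioPolicy(n_assets=10, n_experts=5, window=30)
w = policy(torch.randn(4, 5, 10), torch.randn(4, 10, 30))
print(w.shape, w.sum(dim=1))                          # (4, 10), each row ~1.0
```

The softmax output doubles as a long-only action for the PPO agent: the reward at each step would be the portfolio's log return under the chosen weights.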
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy "surrogate" algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques.
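For reference, the prototypical bound in the first of these frameworks is the input-output mutual information bound of Xu and Raginsky: if the loss $\ell(w, Z)$ is $\sigma$-sub-Gaussian under the data distribution for every $w$, then an algorithm producing $W = A(S)$ from an i.i.d. sample $S = (Z_1, \ldots, Z_n)$ satisfies

$$\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right| \;\le\; \sqrt{\frac{2\sigma^2\, I(W;S)}{n}},$$

where $L_\mu$ and $L_S$ denote the population and empirical risks and $I(W;S)$ is the mutual information between the algorithm's output and its sample. The negative results above show that this bound, along with the conditional, PAC-Bayes, and surrogate variants listed, cannot be used to establish minimax rates for gradient descent in stochastic convex optimization.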
Prostate cancer is the most common cancer in men worldwide and the second leading cause of cancer death in the United States. One of the prognostic features in prostate cancer is the Gleason grading of histopathology images. The Gleason grade is assigned by pathologists based on tumor architecture in Hematoxylin and Eosin (H&E) stained whole slide images (WSI). This process is time-consuming and has known interobserver variability. In the past few years, deep learning algorithms have been used to analyze histopathology images, delivering promising results for grading prostate cancer. However, most of these algorithms rely on fully annotated datasets, which are expensive to generate. In this work, we propose a novel weakly-supervised algorithm to classify prostate cancer grades. The proposed algorithm consists of three steps: (1) extracting discriminative areas in a histopathology image by employing a Transformer-based Multiple Instance Learning (MIL) algorithm, (2) representing the image by constructing a graph over the discriminative patches, and (3) classifying the image into its Gleason grade with a Graph Convolutional Neural Network (GCN) based on the gated attention mechanism. We evaluate our algorithm using publicly available datasets, including TCGA-PRAD, PANDA, and the Gleason 2019 challenge dataset, and we cross-validate the algorithm on an independent dataset. Results show that the proposed model achieves state-of-the-art performance on the Gleason grading task in terms of accuracy, F1 score, and Cohen's kappa. The code is available at https://github.com/NabaviLab/Prostate-Cancer.
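As an illustration of step (2), the sketch below builds a patch graph from MIL-selected patches using spatial k-nearest-neighbor connectivity; the edge rule, sizes, and random features are assumptions, since the abstract does not specify how edges are formed:

```python
import numpy as np
from scipy.spatial import cKDTree

# Build a graph over discriminative patches: nodes carry patch embeddings,
# edges connect spatial k-nearest neighbors within the slide.
rng = np.random.default_rng(42)
n_patches, feat_dim, k = 30, 128, 4
coords = rng.uniform(0, 10_000, size=(n_patches, 2))   # patch (x, y) in the WSI
feats = rng.standard_normal((n_patches, feat_dim))     # patch embeddings

tree = cKDTree(coords)
_, nbrs = tree.query(coords, k=k + 1)                  # each row: self + k neighbors

edges = set()
for i, row in enumerate(nbrs):
    for j in row[1:]:                                  # skip the self-match
        edges.add((min(i, j), max(i, j)))              # undirected edge

edge_index = np.array(sorted(edges)).T                 # (2, n_edges), GCN-ready
adj = np.zeros((n_patches, n_patches), dtype=np.float32)
adj[edge_index[0], edge_index[1]] = adj[edge_index[1], edge_index[0]] = 1.0

print(edge_index.shape, feats.shape)                   # inputs for the GCN step
```

The resulting `edge_index` and node features are in the form expected by common GCN implementations, which then pool node representations (here, via gated attention) into a slide-level grade prediction.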
Algorithms that involve both forecasting and optimization are at the core of solutions to many difficult real-world problems, such as in supply chains (inventory optimization), traffic, and the transition towards carbon-free energy generation in battery/load/production scheduling in sustainable energy systems. Typically, in these scenarios we want to solve an optimization problem that depends on unknown future values, which therefore need to be forecast. As both forecasting and optimization are difficult problems in their own right, relatively little research has been done in this area. This paper presents the findings of the "IEEE-CIS Technical Challenge on Predict+Optimize for Renewable Energy Scheduling," held in 2021. We present a comparison and evaluation of the seven highest-ranked solutions in the competition, to provide researchers with a benchmark problem and to establish the state of the art for this benchmark, with the aim of fostering and facilitating research in this area. The competition used data from the Monash Microgrid, as well as weather data and energy market data. It focused on two main challenges: forecasting renewable energy production and demand, and obtaining an optimal schedule for the activities (lectures) and on-site batteries that leads to the lowest cost of energy. The most accurate forecasts were obtained by gradient-boosted tree and random forest models, and optimization was mostly performed using mixed integer linear and quadratic programming. The winning method predicted different scenarios and optimized over all scenarios jointly using a sample average approximation method.
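The scenario-averaging idea behind the winning method can be sketched in a toy battery-scheduling problem; the prices, limits, and single-battery setup below are invented for illustration and far simpler than the competition problems:

```python
import numpy as np
from scipy.optimize import linprog

# Sample average approximation (SAA) sketch: choose a battery discharge
# schedule that minimizes the *average* energy cost across forecast price
# scenarios. The competition also schedules activities and battery charging;
# this toy keeps only the scenario-averaging idea.
rng = np.random.default_rng(7)
T, S = 24, 50                                         # hours, price scenarios
base = 40 + 25 * np.sin(np.linspace(0, 2 * np.pi, T)) # $/MWh daily shape
prices = base + 8 * rng.standard_normal((S, T))       # forecast scenarios
demand = np.full(T, 1.0)                              # MWh per hour (fixed)

power_cap, energy_cap = 0.5, 4.0                      # MWh/h and MWh limits

# Cost in scenario s: sum_t prices[s, t] * (demand[t] - x[t]). Averaging over
# scenarios keeps the objective linear in x with coefficients -mean(prices);
# the constant demand term is dropped.
c = -prices.mean(axis=0)
res = linprog(
    c,
    A_ub=np.ones((1, T)), b_ub=[energy_cap],          # total discharged energy
    bounds=[(0.0, power_cap)] * T,                    # per-hour power limit
)
x = res.x
print("peak-hour discharge:", x[np.argsort(base)[-8:]].round(2))
print(f"average scenario cost: {np.mean(prices @ (demand - x)):.2f}")
```

With a linear cost the SAA objective collapses to the mean scenario, so the schedule simply discharges in the highest-average-price hours; the nonlinear tariffs and integer activity constraints in the actual competition are what make joint optimization over all scenarios substantive.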
Aiming at highly accurate object detection for connected and automated vehicles (CAVs), this paper presents a Deep Neural Network based 3D object detection model that leverages a three-stage feature extractor by developing a novel LIDAR-Camera fusion scheme. The proposed feature extractor extracts high-level features from two input sensory modalities and recovers the important features discarded during the convolutional process. The novel fusion scheme effectively fuses features across sensory modalities and convolutional layers to find the best representative global features. The fused features are shared by a two-stage network: the region proposal network (RPN) and the detection head (DH). The RPN generates high-recall proposals, and the DH produces the final detection results. The experimental results show that the proposed model outperforms recent published work on the KITTI 2D and 3D detection benchmarks, particularly for distant and highly occluded instances.
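A schematic of cross-modality, cross-layer fusion in this spirit is sketched below; the resize-concat-1x1-conv scheme and channel sizes are illustrative assumptions, not the paper's exact three-stage design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Fuse camera and LiDAR feature maps across two convolutional stages.

    An illustrative scheme (resize -> concat -> 1x1 conv); the paper's actual
    three-stage extractor and fusion layout may differ.
    """

    def __init__(self, cam_ch: int, lidar_ch: int, out_ch: int):
        super().__init__()
        self.fuse_early = nn.Conv2d(cam_ch + lidar_ch, out_ch, kernel_size=1)
        self.fuse_late = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)
        self.stage = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, cam_feat, lidar_feat):
        # Align spatial resolution before concatenating modalities.
        lidar_feat = F.interpolate(lidar_feat, size=cam_feat.shape[-2:],
                                   mode="bilinear", align_corners=False)
        early = self.fuse_early(torch.cat([cam_feat, lidar_feat], dim=1))
        deep = self.stage(early)                       # deeper conv features
        # Cross-layer fusion: bring early features to the deep resolution and
        # recombine, so detail discarded by striding can be recovered.
        early_ds = F.interpolate(early, size=deep.shape[-2:], mode="bilinear",
                                 align_corners=False)
        return self.fuse_late(torch.cat([deep, early_ds], dim=1))

fusion = CrossModalFusion(cam_ch=64, lidar_ch=32, out_ch=128)
out = fusion(torch.randn(1, 64, 96, 312), torch.randn(1, 32, 48, 156))
print(out.shape)                                      # fused features for RPN/DH
```

Re-injecting the early fused map after the strided stage is the sketch's analogue of "recovering features discarded during the convolutional process": fine spatial detail lost to downsampling is carried forward alongside the deeper semantics.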